Spatiotemporal-map-based (STMap) methods have shown great potential for processing high-angle video for vehicle trajectory reconstruction, meeting the needs of various data-driven modeling and imitation learning applications. In this paper, we develop the Spatiotemporal Deep Embedding (STDE) model, which imposes parity constraints at both the pixel and instance levels to generate instance-aware embeddings for vehicle-stripe segmentation on STMaps. At the pixel level, each pixel is encoded with its 8-neighbor pixels at different ranges, and this encoding subsequently guides a neural network to learn the embedding mechanism. At the instance level, a discriminative loss function is designed to pull pixels belonging to the same instance closer together and to push the mean values of different instances apart. The output of the spatiotemporal affinity is then optimized by the mutex-watershed algorithm to obtain the final clustering results. On segmentation metrics, our model outperforms five other baselines for STMap processing and shows robustness to shadows, static noise, and overlap. The model is applied to process all public NGSIM US-101 videos to generate complete vehicle trajectories, demonstrating good scalability and adaptability. Last but not least, we discuss the advantages of scanline-based methods combined with STDE and directions for future work. The code, STMap dataset, and video trajectories are publicly available in an online repository. GitHub link: shorturl.at/jklt0.
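The abstract does not spell out the loss, but the pull/push behavior it describes matches the standard discriminative embedding loss; the sketch below is a minimal PyTorch version under that assumption, with hypothetical margins `delta_pull` and `delta_push`.

```python
import torch

def discriminative_loss(embeddings, labels, delta_pull=0.5, delta_push=1.5):
    """Pull pixels of the same instance toward their instance mean and push
    the means of different instances apart (margins are assumptions)."""
    losses_pull, means = [], []
    for inst in labels.unique():
        emb = embeddings[labels == inst]          # (n_i, d) pixels of one instance
        mu = emb.mean(dim=0)
        means.append(mu)
        # variance (pull) term: hinge on distance to the instance mean
        dist = (emb - mu).norm(dim=1)
        losses_pull.append(torch.clamp(dist - delta_pull, min=0).pow(2).mean())
    pull = torch.stack(losses_pull).mean()
    means = torch.stack(means)                    # (k, d) instance means
    # distance (push) term: hinge on pairwise distances between means
    k = means.shape[0]
    push = embeddings.new_zeros(())
    if k > 1:
        pd = torch.cdist(means, means)            # (k, k) pairwise mean distances
        mask = ~torch.eye(k, dtype=torch.bool)    # off-diagonal pairs only
        push = torch.clamp(2 * delta_push - pd[mask], min=0).pow(2).mean()
    return pull + push
```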
In this paper, we present a solution for roadside LiDAR object detection using a combination of two unsupervised learning algorithms. The 3D point cloud data are first converted into spherical coordinates and filled into an azimuth grid matrix using a hash function. The raw LiDAR data are then rearranged into a spatial-temporal data structure that stores range, azimuth, and intensity information. Based on pattern recognition in the intensity channel, a Dynamic Mode Decomposition method is applied to decompose the point cloud data into a low-rank background and a sparse foreground. A triangle algorithm automatically finds the dividing value that separates moving targets from the static background according to the range information. After intensity and range background subtraction, the foreground moving objects are detected with a density-based detector and encoded into a state-space model for tracking. The output of the proposed model includes vehicle trajectories that can enable many mobility and safety applications. The method was validated against a commercial traffic data collection platform and demonstrated to be an efficient and reliable solution for infrastructure-based LiDAR object detection. In contrast to previous methods that operate directly on scattered, discrete point clouds, the proposed method builds a less complicated linear relationship over the 3D measurement data, capturing the spatial-temporal structure we often desire.
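As a rough illustration of the first preprocessing step, the sketch below converts raw (x, y, z, intensity) returns to spherical coordinates and hashes them into an azimuth-elevation grid; the grid resolution and channel layout are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def to_azimuth_grid(points, n_azimuth=1800, n_elev=32):
    """Convert (N, 4) [x, y, z, intensity] points to spherical coordinates
    and scatter them into an azimuth-elevation grid holding range and
    intensity channels; later points overwrite earlier ones in a cell."""
    x, y, z, intensity = points.T
    rng = np.sqrt(x**2 + y**2 + z**2)                   # range to sensor
    azimuth = np.mod(np.arctan2(y, x), 2 * np.pi)       # [0, 2*pi)
    elev = np.arcsin(np.clip(z / np.maximum(rng, 1e-9), -1, 1))
    # hash each point to a grid cell (row = elevation ring, col = azimuth bin)
    col = np.minimum((azimuth / (2 * np.pi) * n_azimuth).astype(int), n_azimuth - 1)
    row = np.minimum(((elev - elev.min()) / (np.ptp(elev) + 1e-9) * n_elev).astype(int),
                     n_elev - 1)
    grid = np.zeros((n_elev, n_azimuth, 2))             # channels: range, intensity
    grid[row, col, 0] = rng
    grid[row, col, 1] = intensity
    return grid
```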
This paper presents a machine-learning-enhanced longitudinal scanline method to extract vehicle trajectories from high-angle traffic cameras. A Dynamic Mode Decomposition (DMD) method is applied to extract vehicle strands by decomposing the spatial-temporal map (STMap) into a sparse foreground and a low-rank background. A deep neural network named Res-UNet+ is designed by adapting two prevalent deep learning architectures. The Res-UNet+ network significantly improves the performance of STMap-based vehicle detection, while the DMD model provides many interesting insights into the evolution of the underlying spatial structures preserved by the STMap. Model outputs are compared against a previous image-processing model and mainstream semantic-segmentation deep neural networks. After a thorough evaluation, the model proves accurate and robust against many challenging factors. Last but not least, this paper fundamentally addresses many quality issues found in the NGSIM trajectory data. The cleaned, high-quality trajectory data are published to support future theoretical and modeling research on traffic flow and microscopic vehicle control. The method is a reliable solution for video-based trajectory extraction and has broad applicability.
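For readers unfamiliar with DMD-based background subtraction, here is a minimal NumPy sketch of the decomposition applied to an STMap (pixels x frames); the truncation rank and the unit-circle tolerance are illustrative assumptions.

```python
import numpy as np

def dmd_background(stmap, rank=10, eps=1e-2):
    """Split an STMap (pixels x frames) into a low-rank background and a
    sparse foreground via Dynamic Mode Decomposition."""
    X1, X2 = stmap[:, :-1], stmap[:, 1:]                  # time-shifted snapshots
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]           # rank truncation
    A_tilde = U.conj().T @ X2 @ Vh.conj().T / s           # reduced linear operator
    eigvals, W = np.linalg.eig(A_tilde)
    Phi = (X2 @ Vh.conj().T / s) @ W                      # DMD modes
    # background = modes whose eigenvalues sit on the unit circle (|lambda| ~ 1)
    bg = np.abs(np.abs(eigvals) - 1) < eps
    b = np.linalg.lstsq(Phi, stmap[:, 0], rcond=None)[0]  # mode amplitudes
    t = np.arange(stmap.shape[1])
    dynamics = b[bg, None] * eigvals[bg, None] ** t       # temporal evolution
    background = np.real(Phi[:, bg] @ dynamics)
    foreground = stmap - background                       # sparse vehicle strands
    return background, foreground
```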
Managing novelty in perception-based human activity recognition (HAR) is critical in realistic settings to improve task performance over time and to ensure that solutions generalize beyond previously seen samples. Novelty manifests in HAR as unseen samples, activities, objects, environments, and sensor changes, among other ways. Novelty may be task-relevant, such as a new class or new features, or task-irrelevant, resulting in nuisance novelty such as never-before-seen noise, blur, or distorted video recordings. To perform HAR optimally, algorithmic solutions must be tolerant to nuisance novelty and learn over time in the face of novelty. This paper 1) formalizes the definition of novelty in HAR, building upon the prior definition of novelty in classification tasks, 2) proposes an incremental open world learning (OWL) protocol and applies it to the Kinetics datasets to generate a new benchmark, KOWL-718, 3) analyzes the performance of current state-of-the-art HAR models when novelty is introduced over time, and 4) provides a containerized, packaged pipeline for reproducing the OWL protocol and adapting it to any future updates to Kinetics. The experimental analysis includes an ablation study of how the different models perform under various conditions as annotated by Kinetics-AVA. The protocol, as an algorithm for reproducing experiments using the KOWL-718 benchmark, will be publicly released with code and containers at https://github.com/prijatelj/human-activity-recognition-in-an-open-world. The code may be used to analyze different annotations and subsets of the Kinetics datasets in an incremental open world fashion, and can be extended as further updates to Kinetics are released.
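A minimal sketch of what such an incremental OWL evaluation loop can look like is given below; the `model.predict`/`model.update` interface and the evaluate-before-update ordering are illustrative assumptions, not the released pipeline's API.

```python
from dataclasses import dataclass
from typing import Callable, List, Sequence

@dataclass
class Increment:
    samples: Sequence          # videos arriving in this step
    labels: Sequence           # activity labels, possibly unseen classes

def run_owl_protocol(model, increments: List[Increment], evaluate: Callable):
    """Sketch of an incremental open world learning loop: score each new
    increment before learning from it, then reveal labels and update."""
    known = set()
    results = []
    for inc in increments:
        # 1) evaluate on the increment *before* learning from it, so the
        #    score reflects how the model handles novelty (unseen classes).
        preds = [model.predict(x) for x in inc.samples]
        results.append(evaluate(preds, inc.labels, known_classes=known))
        # 2) reveal labels and let the model update incrementally.
        model.update(inc.samples, inc.labels)
        known.update(inc.labels)
    return results
```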
This paper presents a novel approach to the acquisition of language models from corpora. The framework builds on Cobweb, an early system for constructing taxonomic hierarchies of probabilistic concepts that used a tabular, attribute-value encoding of training cases and concepts, making it unsuitable for sequential input like language. In response, we explore three new extensions to Cobweb -- the Word, Leaf, and Path variants. These systems encode each training case as an anchor word and surrounding context words, and they store probabilistic descriptions of concepts as distributions over anchor and context information. As in the original Cobweb, a performance element sorts a new instance downward through the hierarchy and uses the final node to predict missing features. Learning is interleaved with performance, updating concept probabilities and hierarchy structure as classification occurs. Thus, the new approaches process training cases in an incremental, online manner that is very different from most methods for statistical language learning. We examine how well the three variants place synonyms together and keep homonyms apart, their ability to recall synonyms as a function of training set size, and their training efficiency. Finally, we discuss related work on incremental learning and directions for further research.
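To make the instance encoding concrete, the sketch below builds an anchor-plus-context case for each word position; the window size is an illustrative assumption.

```python
from collections import Counter

def encode_case(tokens, i, window=2):
    """Encode token i as an anchor word plus counts of surrounding context
    words, mirroring the instance representation the Word/Leaf/Path
    variants use (window size is an assumption)."""
    left = tokens[max(0, i - window):i]
    right = tokens[i + 1:i + 1 + window]
    return {"anchor": tokens[i], "context": Counter(left + right)}

# Example: sliding over a sentence yields one training case per word.
sentence = "the cat sat on the mat".split()
cases = [encode_case(sentence, i) for i in range(len(sentence))]
print(cases[2])  # {'anchor': 'sat', 'context': Counter({'the': 2, 'cat': 1, 'on': 1})}
```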
The evaluation of abstractive summarization models typically uses test data that is identically distributed to the training data. In real-world practice, documents to be summarized may contain input noise caused by text extraction artifacts or data pipeline bugs. The robustness of model performance under distribution shift caused by such noise is relatively under-studied. We present a large empirical study quantifying the sometimes severe loss in performance (up to 12 ROUGE-1 points) from different types of input noise for a range of datasets and model sizes. We then propose a lightweight method for detecting and removing such noise in the input during model inference without requiring any extra training, auxiliary models, or even prior knowledge of the type of noise. Our proposed approach effectively mitigates the loss in performance, recovering a large fraction of the performance drop, sometimes as large as 11 ROUGE-1 points.
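The abstract does not detail the detection criterion; as one plausible instantiation, the sketch below drops sentences whose per-token negative log-likelihood under some scoring model is an outlier relative to the rest of the document (the z-score rule and threshold are assumptions, not the paper's method).

```python
import math
from typing import Callable, List

def strip_noisy_sentences(sentences: List[str],
                          nll: Callable[[str], float],
                          z_thresh: float = 2.0) -> List[str]:
    """Drop sentences whose per-token negative log-likelihood is an outlier
    relative to the document (z-score test; threshold is an assumption)."""
    if not sentences:
        return sentences
    scores = [nll(s) / max(len(s.split()), 1) for s in sentences]
    mean = sum(scores) / len(scores)
    std = math.sqrt(sum((x - mean) ** 2 for x in scores) / len(scores)) or 1.0
    return [s for s, x in zip(sentences, scores) if (x - mean) / std < z_thresh]
```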
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical imaging analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%), and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based; of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
Text-guided image editing can have a transformative impact in supporting creative applications. A key challenge is to generate edits that are faithful to input text prompts, while consistent with input images. We present Imagen Editor, a cascaded diffusion model built by fine-tuning Imagen on text-guided image inpainting. Imagen Editor's edits are faithful to the text prompts, which is accomplished by using object detectors to propose inpainting masks during training. In addition, Imagen Editor captures fine details in the input image by conditioning the cascaded pipeline on the original high-resolution image. To improve qualitative and quantitative evaluation, we introduce EditBench, a systematic benchmark for text-guided image inpainting. EditBench evaluates inpainting edits on natural and generated images, exploring objects, attributes, and scenes. Through extensive human evaluation on EditBench, we find that object-masking during training leads to across-the-board improvements in text-image alignment -- such that Imagen Editor is preferred over DALL-E 2 and Stable Diffusion -- and, as a cohort, these models are better at object-rendering than text-rendering, and handle material/color/size attributes better than count/shape attributes.
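A rough sketch of object-masking during training might look like the following, where detector boxes are preferred over random boxes when proposing inpainting masks; the sampling probability and random-box fallback are assumptions, not Imagen Editor's actual implementation.

```python
import numpy as np

def object_inpainting_mask(image_hw, detections, p_object=0.8, rng=None):
    """Build a binary inpainting mask for training by masking a detected
    object box (with probability p_object) or a random box otherwise --
    a sketch of object-masking as described, not the paper's exact code."""
    rng = rng or np.random.default_rng()
    h, w = image_hw
    mask = np.zeros((h, w), dtype=bool)
    if detections and rng.random() < p_object:
        # mask one detector-proposed box: (x0, y0, x1, y1)
        x0, y0, x1, y1 = detections[rng.integers(len(detections))]
    else:
        # fall back to a random rectangle
        x0, y0 = rng.integers(w // 2), rng.integers(h // 2)
        x1 = x0 + rng.integers(1, w - x0 + 1)
        y1 = y0 + rng.integers(1, h - y0 + 1)
    mask[int(y0):int(y1), int(x0):int(x1)] = True
    return mask
```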
This paper is a technical overview of DeepMind and Google's recent work on reinforcement learning for controlling commercial cooling systems. Building on expertise that began with cooling Google's data centers more efficiently, we recently conducted live experiments on two real-world facilities in partnership with Trane Technologies, a building management system provider. These live experiments had a variety of challenges in areas such as evaluation, learning from offline data, and constraint satisfaction. Our paper describes these challenges in the hope that awareness of them will benefit future applied RL work. We also describe the way we adapted our RL system to deal with these challenges, resulting in energy savings of approximately 9% and 13% respectively at the two live experiment sites.
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
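Since the checkpoints are openly released, they can be loaded through the Hugging Face hub; the snippet below uses the small 560M-parameter variant for a quick generation test.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load one of the released BLOOM checkpoints from the Hugging Face hub
# (the 560M variant is small enough to try on a single GPU or CPU).
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

inputs = tokenizer("Large language models are", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```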